    SoundBar: exploiting multiple views in multimodal graph browsing

    In this paper we discuss why access to mathematical graphs is problematic for visually impaired people. Through a review of graph understanding theory and interviews with visually impaired users, we explain why current non-visual representations are unlikely to provide effective access to graphs. We propose the use of multiple views of the graph, each providing quick access to specific information, as a way to improve graph usability. We then introduce SoundBar, a multiple-view system for improving access to bar graphs that provides an additional quick audio overview of the graph. An evaluation of SoundBar revealed that the additional views significantly increased accuracy and reduced the time taken in a question-answering task.
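
    The abstract does not detail how the audio overview is generated, but a natural reading is a rapid left-to-right sweep in which each bar becomes a short tone whose pitch encodes its value. The sketch below illustrates that idea; the pitch range, tone length and WAV output are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a SoundBar-style audio overview: each bar is
# rendered as a short tone whose pitch rises with the bar's value, so a
# whole graph can be heard in about a second. All parameters are assumed.
import math
import struct
import wave

SAMPLE_RATE = 44100
TONE_SECONDS = 0.15  # length of the tone for each bar

def value_to_frequency(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a bar value onto an assumed two-octave pitch range."""
    t = (value - lo) / (hi - lo) if hi > lo else 0.0
    return f_min + t * (f_max - f_min)

def overview_wav(values, path="overview.wav"):
    """Write one tone per bar, left to right, as a mono 16-bit WAV file."""
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_frequency(v, lo, hi)
        for n in range(int(SAMPLE_RATE * TONE_SECONDS)):
            sample = 0.4 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(bytes(frames))

overview_wav([3, 7, 2, 9, 5])  # five bars -> a five-note rising/falling sweep
```

    Heard as a quick sweep, the contour of the tone sequence conveys the overall shape of the graph without reading out individual values, which is the role an overview plays alongside more detailed views.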

    DOLPHIN: the design and initial evaluation of multimodal focus and context

    In this paper we describe a new focus and context visualisation technique called multimodal focus and context. This technique uses a hybrid visual and spatialised audio display space to overcome the small visual displays of mobile devices. We demonstrate the technique by applying it to maps of theme parks. We present the results of an experiment comparing multimodal focus and context to a purely visual display technique. The results showed that neither system was significantly better than the other; we believe this is due to issues involving the perception of multiple structured audio sources.
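
    As a rough illustration of the split described above, the hypothetical sketch below partitions map items into a visual focus region and a spatialised-audio context layer, encoding each off-screen item's direction as an azimuth and its distance as a gain. The geometry and the gain model are assumptions, not the paper's implementation.

```python
# Hypothetical focus-and-context split: items inside a rectangular focus
# region are drawn visually; the rest become spatialised audio sources.
import math

def split_focus_context(items, focus, listener):
    """items: [(name, x, y)]; focus: (x0, y0, x1, y1); listener: (x, y)."""
    visual, audio = [], []
    for name, x, y in items:
        if focus[0] <= x <= focus[2] and focus[1] <= y <= focus[3]:
            visual.append((name, x, y))                 # draw in the focus view
        else:
            dx, dy = x - listener[0], y - listener[1]
            azimuth = math.degrees(math.atan2(dx, dy))  # 0 degrees = straight ahead
            distance = math.hypot(dx, dy)
            gain = 1.0 / (1.0 + distance)               # farther items are quieter
            audio.append((name, azimuth, gain))         # hand to an audio renderer
    return visual, audio

# A theme-park map: the gate is on screen, the coaster is heard front-right.
visual, audio = split_focus_context(
    [("gate", 5, 5), ("coaster", 40, 80)],
    focus=(0, 0, 20, 20), listener=(10, 10))
```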

    MultiVis: improving access to visualisations for visually impaired people

    This paper illustrates work undertaken on the MultiVis project to allow visually impaired users both to construct and to browse mathematical graphs effectively. We start by discussing the need for such work and some of the problems with current technology. We then present Graph Builder, a novel tool for interactive graph construction, and SoundBar, which provides quick overview access to bar graphs.

    Olfoto: designing a smell-based interaction

    Get PDF
    We present a study into the use of smell for searching digital photo collections. Many people now have large photo libraries on their computers, and effective search tools are needed. Smell has a strong link to memory and emotion, so it may be a good way to cue recall when searching. Our study compared text- and smell-based tagging. In the first stage we generated a set of smells and tag names from user descriptions of photos; participants then used these to tag photos, returning two weeks later to answer questions about their photos. Results showed that participants could tag effectively with text labels, as this is a common and familiar task. Performance with smells was lower, but participants performed significantly above chance, with some participants using smells well. This suggests that smell has potential. Results also showed that some smells were consistently identified and useful, while others were not, highlighting issues with smell delivery devices. We also discuss some practical issues of using smell for interaction.

    PULSE: the design and evaluation of an auditory display to provide a social vibe

    We present PULSE, a mobile application designed to give users a 'vibe': an intrinsic understanding of the people, places and activities around their current location, derived from messages on the Twitter social networking site. We compared two auditory presentations of the vibe. One presented message metadata implicitly, through modification of attributes of the spoken message. The other presented the same metadata through additional auditory cues. We compared the two techniques in both a lab study and a real-world study. The additional auditory cues allowed smaller changes in metadata to be detected more accurately, but were least preferred when PULSE was used in context. Results also showed that PULSE enhanced and shaped users' understanding, with audio presentation allowing a closer coupling of digital data to the physical world.
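
    The contrast between the two presentations can be sketched as follows: one folds the metadata into the parameters of the speech itself, while the other leaves the speech unmodified and prepends a separate auditory cue. Message age is used as the example metadata here; the parameter ranges and cue file names are invented for illustration and are not the study's actual values.

```python
# Hypothetical renderings of one Twitter message under PULSE's two
# presentation styles, using message age as the metadata of interest.

def implicit_presentation(text, age_minutes, max_age=60.0):
    """Encode age by modifying attributes of the spoken message itself:
    newer messages are read faster and at a higher pitch (assumed mapping)."""
    freshness = max(0.0, 1.0 - age_minutes / max_age)
    return {"say": text,
            "rate": 1.0 + 0.5 * freshness,    # 1.0x .. 1.5x speaking rate
            "pitch": 1.0 + 0.2 * freshness}   # 1.0x .. 1.2x base pitch

def cue_presentation(text, age_minutes):
    """Encode the same metadata with an additional auditory cue played
    before the message, leaving the speech itself unmodified."""
    cue = "cue_recent.wav" if age_minutes < 10 else "cue_old.wav"
    return {"play_first": cue, "say": text, "rate": 1.0, "pitch": 1.0}

print(implicit_presentation("Great band at the park stage!", age_minutes=5))
print(cue_presentation("Great band at the park stage!", age_minutes=5))
```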

    DigiGraff: considering graffiti as a location based social network

    We introduce DigiGraff: a technique that allows lightweight, unconstrained digital annotation of the physical environment via mobile digital projection. Using graffiti as a design meme, DigiGraff provides a way to study the role of location in the creation and browsing of social media, and introduces concepts of temporality, ageing and wear into message presentation. As the volume of geo-tagged social media increases, we outline why such consideration is relevant and important, and how DigiGraff will support a deeper understanding of location data in social media.
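
    One way to picture the temporality, ageing and wear concepts is as a presentation weight that decays over time and erodes with viewing, loosely mirroring how physical graffiti weathers. The sketch below is purely illustrative; the decay model and its constants are assumptions, not DigiGraff's actual behaviour.

```python
# Assumed ageing/wear model: a projected tag fades exponentially with age
# and erodes slightly each time it is viewed.
import math

def tag_opacity(age_hours, views, half_life_hours=72.0, wear_per_view=0.002):
    """Return an opacity in [0, 1] combining time decay and viewing wear."""
    time_fade = math.exp(-math.log(2) * age_hours / half_life_hours)
    wear = max(0.0, 1.0 - wear_per_view * views)
    return time_fade * wear

print(tag_opacity(age_hours=24, views=50))  # a day-old, lightly viewed tag
```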

    An Audio-Haptic Interface Concept Based on Depth Information

    We present an interaction tool based on rendering distance cues for ordering sound sources in depth. The user interface consists of a linear-position tactile sensor made of conductive material. The touch position is mapped onto the listening position on a rectangular virtual membrane, modelled by a two-dimensional digital waveguide mesh, which provides the distance cues. Spatialisation of sound sources in depth allows a hierarchical display of multiple audio streams, as in auditory menus. Moreover, the similar geometries of the haptic interface and the virtual auditory environment allow a direct mapping between the touch position and the listening position, providing an intuitive and continuous interaction tool for auditory navigation.
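
    A much-simplified sketch of this mapping is given below: the normalised touch position on the strip becomes the listening position along the membrane's depth axis, and each source's gain falls off with its distance from that position. A real implementation would derive the distance cues from the two-dimensional digital waveguide mesh itself; the plain gain model here is a stand-in assumption.

```python
# Sliding a finger along a linear touch strip moves the listener in depth;
# sources nearer the listening position are louder, giving a depth ordering.
# The 1/(1+d) attenuation is an assumed stand-in for waveguide-mesh cues.

def listener_depth(touch_pos, membrane_depth=10.0):
    """touch_pos in [0, 1] along the strip -> depth on the virtual membrane."""
    return touch_pos * membrane_depth

def render_gains(touch_pos, source_depths, membrane_depth=10.0):
    """Attenuate each source with its distance from the listening position
    and return the sources ordered loudest-first."""
    listener = listener_depth(touch_pos, membrane_depth)
    gains = {name: 1.0 / (1.0 + abs(depth - listener))
             for name, depth in source_depths.items()}
    return dict(sorted(gains.items(), key=lambda kv: -kv[1]))

# Three auditory menu items placed at different depths on the membrane:
print(render_gains(0.2, {"inbox": 1.0, "calendar": 5.0, "news": 9.0}))
```

    Because the strip and the membrane's depth axis share the same one-dimensional geometry, the touch-to-listening mapping stays direct and continuous, which is the intuition the abstract highlights.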